Recent advances in generative adversarial networks (GANs) have demonstrated the capability of generating stunning photo-realistic portrait images. While some prior works have applied such image GANs to unconditional 2D portrait video generation and static 3D portrait synthesis, few works have successfully extended GANs to generate 3D-aware portrait videos. In this work, we propose PV3D, the first generative framework that can synthesize multi-view consistent portrait videos. Specifically, our method extends a recent static 3D-aware image GAN to the video domain by generalizing the 3D implicit neural representation to model the spatio-temporal space. To introduce motion dynamics into the generation process, we develop a motion generator that stacks multiple motion layers to generate motion features via modulated convolution. To alleviate motion ambiguities caused by camera/human motions, we propose a simple yet effective camera condition strategy for PV3D, enabling both temporally and multi-view consistent video generation. Moreover, PV3D introduces two discriminators that regularize the spatial and temporal domains to ensure the plausibility of the generated portrait videos. Together, these designs enable PV3D to generate 3D-aware, motion-plausible portrait videos with high-quality appearance and geometry, significantly outperforming prior works. As a result, PV3D is able to support many downstream applications such as animating static portraits and view-consistent video motion editing. Code and models will be released at https://showlab.github.io/pv3d.
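To make the motion-layer idea concrete, below is a minimal PyTorch sketch of one modulated-convolution motion layer driven by a per-frame motion code; the module names, shapes, and demodulation step are illustrative assumptions, not PV3D's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedMotionLayer(nn.Module):
    """Hypothetical motion layer: a StyleGAN-style modulated convolution
    whose weights are scaled by a per-frame motion code."""
    def __init__(self, in_ch, out_ch, code_dim, kernel=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel, kernel))
        self.affine = nn.Linear(code_dim, in_ch)  # motion code -> per-channel scales
        self.pad = kernel // 2

    def forward(self, feat, motion_code):
        # feat: (B, in_ch, H, W), motion_code: (B, code_dim)
        b, in_ch, h, w = feat.shape
        scale = self.affine(motion_code).view(b, 1, in_ch, 1, 1)   # modulation
        weight = self.weight.unsqueeze(0) * scale                   # (B, out, in, k, k)
        demod = torch.rsqrt(weight.pow(2).sum(dim=(2, 3, 4)) + 1e-8)
        weight = weight * demod.view(b, -1, 1, 1, 1)                # demodulation
        # grouped-conv trick: fold the batch dimension into groups
        feat = feat.view(1, b * in_ch, h, w)
        weight = weight.view(-1, in_ch, *self.weight.shape[2:])
        out = F.conv2d(feat, weight, padding=self.pad, groups=b)
        return out.view(b, -1, h, w)

# A motion generator could stack several such layers and feed the resulting
# motion features, together with the camera condition, into the 3D-aware
# image synthesis backbone.
```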
Sampling diverse programs from a code language model and reranking with model likelihood is a popular method for code generation, but it is prone to preferring degenerate solutions. Inspired by collaborative programming, we propose Coder-Reviewer reranking. We augment Coder language models from past work, which generate programs given language instructions, with Reviewer models, which evaluate the likelihood of the instruction given the generated programs. We perform an extensive study across six datasets with eight models from three model families. Experimental results show that Coder-Reviewer reranking leads to consistent and significant improvement (up to 17% absolute accuracy gain) over reranking with the Coder model only. When combined with executability filtering, Coder-Reviewer reranking can often outperform the minimum Bayes risk method. Coder-Reviewer reranking is easy to implement by prompting, can generalize to different programming languages, and works well with off-the-shelf hyperparameters.
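As a minimal illustration of the reranking rule, the sketch below combines the Coder score log p(program | instruction) with the Reviewer score log p(instruction | program); the scoring callables are placeholders for prompted language-model queries, not an official implementation.

```python
def coder_reviewer_rerank(instruction, programs, coder_logp, reviewer_logp):
    """Rerank sampled programs by the Coder x Reviewer likelihood.

    coder_logp(instruction, program)    -> log p(program | instruction)
    reviewer_logp(instruction, program) -> log p(instruction | program)
    Both are placeholders for prompted language-model scoring calls.
    """
    scored = [
        (coder_logp(instruction, p) + reviewer_logp(instruction, p), p)
        for p in programs
    ]
    # Executability filtering, if available, would be applied before this sort.
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored]
```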
Our education system comprises a series of curricula. For example, when we learn mathematics at school, we learn in order from addition, to multiplication, and later to integration. Delineating a curriculum for teaching either a human or a machine shares the underlying goal of maximizing the positive knowledge transfer from early to later tasks and minimizing forgetting of the early tasks. Here, we exhaustively surveyed the effect of curricula on existing continual learning algorithms in the class-incremental setting, where algorithms must learn classes one at a time from a continuous stream of data. We observed that across a breadth of possible class orders (curricula), curricula influence the retention of information and that this effect is not just a product of stochasticity. Further, as a primary effort toward automated curriculum design, we proposed a method capable of designing and ranking effective curricula based on inter-class feature similarities. We compared the predicted curricula against empirically determined effectual curricula and observed significant overlaps between the two. To support the study of a curriculum designer, we conducted a series of human psychophysics experiments and contributed a new Continual Learning benchmark in object recognition. We assessed the degree of agreement in effective curricula between humans and machines. Surprisingly, our curriculum designer successfully predicts an optimal set of curricula that is effective for human learning. There are many considerations in curriculum design, such as timely student feedback and learning with multiple modalities. Our study is the first attempt to set a standard framework for the community to tackle the problem of teaching humans and machines to learn to learn continuously.
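As a rough sketch of how a similarity-based curriculum designer could rank class orders, the snippet below builds class prototypes from feature embeddings and scores each candidate curriculum by the similarity of consecutively taught classes; the scoring rule and preference direction are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def class_prototypes(features, labels):
    """Mean feature embedding per class. features: (N, D), labels: (N,)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def curriculum_score(order, protos):
    """Illustrative score: mean cosine similarity of consecutively taught classes."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = [cos(protos[a], protos[b]) for a, b in zip(order[:-1], order[1:])]
    return sum(sims) / len(sims)

def rank_curricula(candidate_orders, features, labels):
    """Return (score, order) pairs, highest score first. Whether high or low
    inter-class similarity makes the better curriculum is an empirical question."""
    protos = class_prototypes(features, labels)
    scored = [(curriculum_score(order, protos), order) for order in candidate_orders]
    return sorted(scored, key=lambda t: t[0], reverse=True)
```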
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
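For reference, a minimal usage sketch with the Hugging Face transformers library is shown below; it loads the small bigscience/bloom-560m checkpoint (the full 176B bigscience/bloom model requires substantial hardware), and checkpoint names should be verified against the official release.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small released BLOOM variant for quick experimentation.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```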
Recent research has demonstrated the capability of behavior signals captured by smartphones and wearables for longitudinal behavior modeling. However, there is a lack of a comprehensive public dataset that serves as an open testbed for fair comparison among algorithms. Moreover, prior studies mainly evaluate algorithms using data from a single population within a short period, without measuring the cross-dataset generalizability of these algorithms. We present the first multi-year passive sensing datasets, containing over 700 user-years and 497 unique users' data collected from mobile and wearable sensors, together with a wide range of well-being metrics. Our datasets can support multiple cross-dataset evaluations of behavior modeling algorithms' generalizability across different users and years. As a starting point, we provide the benchmark results of 18 algorithms on the task of depression detection. Our results indicate that both prior depression detection algorithms and domain generalization techniques show potential but need further research to achieve adequate cross-dataset generalizability. We envision our multi-year datasets can support the ML community in developing generalizable longitudinal behavior modeling algorithms.
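A hedged sketch of one possible cross-dataset evaluation protocol is given below: train a depression detector on all but one cohort/year and test on the held-out one. The model, features, and metric are placeholders, not the benchmark's prescribed pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

def leave_one_dataset_out(datasets, make_model=RandomForestClassifier):
    """Hypothetical protocol: `datasets` maps a cohort/year name to (X, y),
    where X are behavior features and y are depression labels derived from
    well-being metrics. Train on the remaining datasets, test on the held-out one."""
    results = {}
    for held_out, (X_test, y_test) in datasets.items():
        X_train = np.concatenate([X for k, (X, _) in datasets.items() if k != held_out])
        y_train = np.concatenate([y for k, (_, y) in datasets.items() if k != held_out])
        model = make_model()
        model.fit(X_train, y_train)
        results[held_out] = balanced_accuracy_score(y_test, model.predict(X_test))
    return results
```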
Aggregated data obtained from job postings provide powerful insights into labor market demands and emerging skills, and can aid job matching. However, most extraction approaches are supervised and thus require costly and time-consuming annotation. To overcome this, we propose to extract skills with weak supervision. We leverage the European Skills, Competences, Qualifications and Occupations (ESCO) taxonomy to find similar skills in job ads via latent representations. The method shows a strong positive signal, outperforming baselines based on token-level and syntactic patterns.
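As an illustration of the weak-supervision idea, the sketch below labels a candidate span in a job ad as a skill when its latent representation is close to an ESCO skill entry; the encoder, span proposals, and threshold are assumptions, not the paper's exact recipe.

```python
import numpy as np

def weak_skill_matches(ad_spans, esco_skills, embed, threshold=0.7):
    """Label a candidate span as a skill if its latent representation is close
    to an ESCO skill entry. `embed` is a placeholder for any phrase encoder;
    the span proposal strategy and threshold are illustrative assumptions."""
    skill_vecs = np.stack([embed(s) for s in esco_skills])
    skill_vecs /= np.linalg.norm(skill_vecs, axis=1, keepdims=True) + 1e-12
    matches = []
    for span in ad_spans:
        v = embed(span)
        v = v / (np.linalg.norm(v) + 1e-12)
        sims = skill_vecs @ v                      # cosine similarity to every skill
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            matches.append((span, esco_skills[best], float(sims[best])))
    return matches
```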
VQA is an ambitious task that aims to answer any question about an image. However, in practice, it is hard to build such a system once and for all, since users' requirements are constantly updated and the system has to implement new functions. Thus, Continual Learning (CL) ability is a must for developing advanced VQA systems. Recently, a pioneering work split a VQA dataset into disjoint answer sets to study this topic. However, CL on VQA involves more than the expansion of label sets (new answer sets). It is crucial to study how to answer questions when deploying VQA systems to new environments (new visual scenes) and how to answer questions that require new functions (new question types). Thus, we propose CLOVE, a benchmark for Continual Learning On Visual quEstion answering, which contains scene-incremental and function-incremental settings for the two aforementioned CL scenarios. In terms of methodology, the main difference between CL on VQA and classification is that the former additionally involves expanding, and preventing forgetting of, reasoning mechanisms, while the latter focuses on class representations. Thus, we propose a real-data-free, replay-based method tailored for CL on VQA, named Scene Graph as Prompt for Symbolic Replay. Using a piece of scene graph as a prompt, it replays pseudo scene graphs that represent past images, together with their associated QA pairs. A unified VQA model is also proposed to utilize the current and replayed data to enhance its question-answering ability. Finally, experimental results reveal challenges in CLOVE and demonstrate the effectiveness of our method. The dataset and code will be available at https://github.com/showlab/clvqa.
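The sketch below only illustrates the symbolic-replay bookkeeping implied above: storing compact scene-graph triplets with their QA pairs and mixing them into later training batches. How pseudo scene graphs are generated from a prompt is the method's core contribution and is not reproduced here.

```python
import random

class SymbolicReplayBuffer:
    """Store compact scene-graph triplets with their QA pairs (instead of raw
    past images) and sample them for replay while training on new tasks."""
    def __init__(self, capacity=10000):
        self.items, self.capacity = [], capacity

    def add(self, scene_graph_triplets, question, answer):
        if len(self.items) >= self.capacity:
            self.items.pop(random.randrange(len(self.items)))  # random eviction
        self.items.append({"graph": scene_graph_triplets, "q": question, "a": answer})

    def replay_batch(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# During training on a new scene or function, each batch of current VQA
# examples would be mixed with buffer.replay_batch(k) so the unified model
# retains its earlier question-answering abilities.
```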
In the Metaverse, the physical space and the virtual space co-exist and interact simultaneously. While the physical space is virtually enhanced with information, the virtual space is continuously refreshed with real-time, real-world information. To allow users to process and manipulate information seamlessly between the real and digital spaces, novel technologies must be developed. These include smart interfaces, new augmented realities, and efficient storage, data management, and dissemination techniques. In this paper, we first discuss some promising co-space applications. These applications offer opportunities that neither of the spaces can realize on its own. We then discuss the challenges they raise. Finally, we discuss and envision what is likely to be required from the database and system perspectives.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Aquatic locomotion is a classic fluid-structure interaction (FSI) problem of interest to biologists and engineers. Solving the fully coupled FSI equations for incompressible Navier-Stokes flow and finite elasticity is computationally expensive. Optimizing robotic swimmer designs within such a system typically involves cumbersome, gradient-free procedures on top of the already costly simulation. To address this challenge, we present a novel, fully differentiable hybrid approach to FSI that combines a 2D direct numerical simulation of the swimmer's deformable solid structure with a physics-constrained neural network surrogate that captures the hydrodynamic effects of the fluid. For the deformable solid simulation of the swimmer's body, we use state-of-the-art techniques from computer graphics to speed up the finite element method (FEM). For the fluid simulation, we use a U-Net architecture trained with a physics-based loss function to predict the flow field at each time step. The pressure and velocity field outputs of the neural network are sampled around the swimmer's boundary with an immersed boundary method (IBM) to compute its swimming motion accurately and efficiently. We demonstrate the computational efficiency and differentiability of our hybrid simulator on a 2D carangiform swimmer. Thanks to its differentiability, the simulator can be used for computational design of controllers for soft bodies immersed in fluids via direct gradient-based optimization.
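As a hedged example of a physics-constrained training objective for the U-Net surrogate, the sketch below adds a soft incompressibility (zero-divergence) penalty to a supervised flow-field loss; the actual physics terms and weights used in the paper may differ.

```python
import torch.nn.functional as F

def physics_constrained_loss(pred, target, dx=1.0, lam=0.1):
    """Supervised error on predicted (u, v, p) fields plus a soft
    incompressibility penalty encouraging div(u) ~ 0 on the interior grid.
    pred/target: (B, 3, H, W) with channels (u, v, p); dx is the grid spacing."""
    data_loss = F.mse_loss(pred, target)
    u, v = pred[:, 0], pred[:, 1]
    du_dx = (u[:, :, 2:] - u[:, :, :-2]) / (2 * dx)     # central difference along x (width)
    dv_dy = (v[:, 2:, :] - v[:, :-2, :]) / (2 * dx)     # central difference along y (height)
    divergence = du_dx[:, 1:-1, :] + dv_dy[:, :, 1:-1]  # align interior grid points
    return data_loss + lam * divergence.pow(2).mean()
```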